Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
In this paper, we attempt to link the inner workings of a neural language
model to linguistic theory, focusing on a complex phenomenon well discussed in
formal linguistics: (negative) polarity items. We briefly discuss the leading
hypotheses about the licensing contexts that allow negative polarity items and
evaluate to what extent a neural language model has the ability to correctly
process a subset of such constructions. We show that the model finds a relation
between the licensing context and the negative polarity item and appears to be
aware of the scope of this context, which we extract from a parse tree of the
sentence. With this research, we hope to pave the way for other studies linking
formal linguistics to deep learning.
Comment: Accepted to the EMNLP workshop "Analyzing and interpreting neural networks for NLP".
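The kind of evaluation described above can be approximated with a small probing script: compare the probability a language model assigns to an NPI such as "ever" in a licensed (negated) context versus an unlicensed one. The sketch below uses a pretrained GPT-2 model via the transformers library purely as a stand-in; the paper itself analyses an LSTM language model and extracts licensing scopes from parse trees, which this sketch does not attempt.

```python
# Hypothetical probe (not the paper's setup): compare the probability a pretrained
# causal LM assigns to the NPI "ever" in a licensed (negated) vs. unlicensed context.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def npi_logprob(prefix: str, npi: str = " ever") -> float:
    """Log-probability of the NPI token right after the given prefix."""
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    npi_id = tokenizer(npi).input_ids[0]          # assumes the NPI is a single token
    with torch.no_grad():
        logits = model(prefix_ids).logits[0, -1]  # next-token distribution
    return torch.log_softmax(logits, dim=-1)[npi_id].item()

licensed = npi_logprob("Nobody in the room has")      # negation licenses "ever"
unlicensed = npi_logprob("Somebody in the room has")  # no licensor present
print(f"licensed: {licensed:.2f}  unlicensed: {unlicensed:.2f}")
# A model sensitive to NPI licensing should assign a higher log-probability
# to the NPI in the licensed condition.
```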
Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution
We present a setup for training, evaluating and interpreting neural language
models that uses artificial, language-like data. The data is generated using a
massive probabilistic grammar (based on state-split PCFGs) that is itself
derived from a large natural language corpus, but also provides us complete
control over the generative process. We describe and release both grammar and
corpus, and test for the naturalness of our generated data. This approach
allows us to define closed-form expressions to efficiently compute exact lower
bounds on obtainable perplexity using both causal and masked language
modelling. Our results show striking differences between neural language
modelling architectures and training objectives in how closely they allow
approximating the lower bound on perplexity. Our approach also allows us to
directly compare learned representations to symbolic rules in the underlying
source. We experiment with various techniques for interpreting model behaviour
and learning dynamics. With access to the underlying true source, our results
show striking differences in learning dynamics between different classes of
words.
Comment: Findings of EMNLP 2023.
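The closed-form perplexity bound mentioned above rests on a standard information-theoretic fact: when the true next-token distribution is known, no model can achieve a cross-entropy below the conditional entropy of the source, so the exponential of that entropy lower-bounds perplexity. The toy sketch below illustrates the computation on a small Markov chain standing in for the paper's state-split PCFG; the chain, its probabilities, and the averaging over the stationary distribution are illustrative assumptions, not the paper's construction.

```python
# Toy illustration (not the paper's PCFG): for a source with known conditional
# next-symbol distributions, no model can achieve a lower cross-entropy than the
# source's conditional entropy, so exp(entropy) lower-bounds the perplexity.
import numpy as np

# Stand-in source: a 3-symbol Markov chain with known transition probabilities.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])

# Stationary distribution of the chain (left eigenvector for eigenvalue 1).
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# Conditional entropy H(X_t | X_{t-1}) in nats, averaged over the stationary distribution.
H = -sum(pi[i] * P[i, j] * np.log(P[i, j])
         for i in range(3) for j in range(3) if P[i, j] > 0)

print(f"entropy rate: {H:.3f} nats/symbol")
print(f"perplexity lower bound: {np.exp(H):.3f}")
# Any model evaluated on data sampled from this source has perplexity >= exp(H);
# a model that exactly reproduces the source's conditionals matches the bound.
```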
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
Extensive research has recently shown that recurrent neural language models
are able to process a wide range of grammatical phenomena. How these models are
able to perform these remarkable feats so well, however, is still an open
question. To gain more insight into what information LSTMs base their decisions
on, we propose a generalisation of Contextual Decomposition (GCD). In
particular, this setup enables us to accurately distil which part of a
prediction stems from semantic heuristics, which part truly emanates from
syntactic cues, and which part arises from biases in the model itself.
We investigate this technique on tasks pertaining to syntactic agreement and
co-reference resolution and discover that the model strongly relies on a
default reasoning effect to perform these tasks.
Comment: To appear at CoNLL 2019.
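The attribution idea behind GCD can be illustrated with a deliberately simplified stand-in: in a purely linear (bag-of-embeddings) model, a logit splits exactly into additive contributions from disjoint groups of input tokens, for example the syntactic subject versus a distracting attractor noun. The sketch below shows only this linear analogue; the paper's actual method propagates such a decomposition through the non-linear gates of an LSTM, which is considerably more involved.

```python
# Much-simplified stand-in for Contextual Decomposition: in a purely linear model
# the logit splits exactly into additive contributions from disjoint input groups.
# GCD performs an analogous (approximate) split through the gates of an LSTM.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "boy", "near", "the", "cars", "walks"]
emb = {w: rng.normal(size=8) for w in set(vocab)}   # toy 8-d embeddings
w_out = rng.normal(size=8)                          # toy output weights for one logit

subject_idx = {1}    # "boy"  -- the syntactic cue for number agreement
attractor_idx = {4}  # "cars" -- a distracting attractor noun

def contribution(indices):
    """Additive share of the logit coming from the selected tokens."""
    return sum(w_out @ emb[vocab[i]] for i in indices)

print("from subject:  ", contribution(subject_idx))
print("from attractor:", contribution(attractor_idx))
print("total logit:   ", contribution(range(len(vocab))))
# In this bag-of-embeddings toy the parts sum exactly to the whole; GCD recovers
# a comparable split for the non-linear interactions inside an LSTM.
```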
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Acknowledgments: We would like to thank the anonymous reviewers for their extensive and thoughtful feedback and suggestions, which greatly improved our work, as well as the action editor for his helpful guidance. We would also like to thank members of the ILLC, past and present, for their useful comments and feedback, specifically Dieuwke Hupkes, Mario Giulianelli, Sandro Pezzelle, and Ece Takmaz. Arabella Sinclair worked on this project while affiliated with the University of Amsterdam. The project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455).
DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers
In recent years, many interpretability methods have been proposed to help interpret the internal states of Transformer models, at different levels of precision and complexity. Here, to analyze encoder-decoder Transformers, we propose a simple, new method: DecoderLens. Inspired by the LogitLens (for decoder-only Transformers), this method involves allowing the decoder to cross-attend to representations of intermediate encoder layers instead of using the final encoder output, as is normally done in encoder-decoder models. The method thus maps previously uninterpretable vector representations to human-interpretable sequences of words or symbols. We report results from the DecoderLens applied to models trained on question answering, logical reasoning, speech recognition and machine translation. The DecoderLens reveals several specific subtasks that are solved at low or intermediate layers, shedding new light on the information flow inside the encoder component of this important class of models.
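The core idea is easy to prototype: run the encoder once, then let the decoder generate from each intermediate encoder layer in turn. The sketch below does this with a HuggingFace T5 model; the model choice, the translation prompt, and the use of generate with precomputed encoder_outputs are assumptions made for illustration, not the authors' released implementation.

```python
# Rough sketch of the DecoderLens idea with a HuggingFace T5 model: decode from an
# intermediate encoder layer instead of the final encoder output.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from transformers.modeling_outputs import BaseModelOutput

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
model.eval()

inputs = tokenizer("translate English to German: The house is small.",
                   return_tensors="pt")

with torch.no_grad():
    enc = model.encoder(**inputs, output_hidden_states=True, return_dict=True)

# hidden_states[0] is the embedding layer; decode from every subsequent layer.
for layer, hidden in enumerate(enc.hidden_states[1:], start=1):
    encoder_outputs = BaseModelOutput(last_hidden_state=hidden)
    out = model.generate(encoder_outputs=encoder_outputs,
                         attention_mask=inputs["attention_mask"],
                         max_new_tokens=20)
    print(f"layer {layer:2d}: {tokenizer.decode(out[0], skip_special_tokens=True)}")
# Lower layers typically yield degenerate or partial outputs; the layer at which the
# output becomes correct hints at where the relevant subtask is solved.
```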
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative
impact, these new capabilities are as yet poorly characterized. In order to
inform future research, prepare for disruptive new model capabilities, and
ameliorate socially harmful effects, it is vital that we understand the present
and near-future capabilities and limitations of language models. To address
this challenge, we introduce the Beyond the Imitation Game benchmark
(BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442
authors across 132 institutions. Task topics are diverse, drawing problems from
linguistics, childhood development, math, common-sense reasoning, biology,
physics, social bias, software development, and beyond. BIG-bench focuses on
tasks that are believed to be beyond the capabilities of current language
models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense
transformer architectures, and Switch-style sparse transformers on BIG-bench,
across model sizes spanning millions to hundreds of billions of parameters. In
addition, a team of human expert raters performed all tasks in order to provide
a strong baseline. Findings include: model performance and calibration both
improve with scale, but are poor in absolute terms (and when compared with
rater performance); performance is remarkably similar across model classes,
though with benefits from sparsity; tasks that improve gradually and
predictably commonly involve a large knowledge or memorization component,
whereas tasks that exhibit "breakthrough" behavior at a critical scale often
involve multiple steps or components, or brittle metrics; social bias typically
increases with scale in settings with ambiguous context, but this can be
improved with prompting.
Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
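BIG-bench distributes most tasks as JSON files listing input/target examples, alongside a full programmatic evaluation harness in the repository above. The fragment below is only a hedged illustration of how such a JSON task could be scored with exact match; the file path and the generate_answer function are placeholders, and the real harness additionally handles multiple-choice targets, calibration, and other metrics that this sketch ignores.

```python
# Minimal, hypothetical evaluation loop over a BIG-bench-style JSON task.
# generate_answer() and the task path are placeholders, not part of the benchmark.
import json

def generate_answer(prompt: str) -> str:
    """Placeholder for a call to the language model under evaluation."""
    raise NotImplementedError

def evaluate(task_path: str) -> float:
    """Exact-match accuracy over the task's input/target examples."""
    with open(task_path) as f:
        task = json.load(f)
    examples = task["examples"]
    correct = 0
    for ex in examples:
        # Targets may be a single string or a list of acceptable strings.
        targets = ex["target"] if isinstance(ex["target"], list) else [ex["target"]]
        prediction = generate_answer(ex["input"]).strip()
        correct += int(any(prediction == str(t).strip() for t in targets))
    return correct / len(examples)

# accuracy = evaluate("path/to/some_task/task.json")  # hypothetical path
```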